Workspace Consistency: A Programming Model for Shared Memory Parallelism
Authors
Abstract
Recent interest in deterministic parallelism has yielded new deterministic programming languages, which offer promising features but require rewriting existing code, and deterministic schedulers, which emulate existing thread APIs but do not eliminate races from the basic programming model. Workspace consistency (WC) is a new synchronization and memory consistency model that offers a "naturally deterministic," race-free programming model that can be adopted in both new and existing languages. WC's basic semantics are inspired by, and intended to be as easily understood as, the "parallel assignment" construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. Prototype implementations of a restricted form of WC already exist, supporting only strictly hierarchical fork/join-style synchronization, but this paper develops and explores the model in more detail and extends it to support non-hierarchical synchronization patterns such as producer/consumer pipelines and futures.
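The "read inputs before writing shared outputs" semantics can be illustrated with a small sketch. The following Python code is not the paper's implementation; it is a minimal, hypothetical emulation (the name `wc_fork_join` is invented here) of fork/join-style workspace consistency, in which each thread reads only a private snapshot of shared state taken at the fork, and writes are merged deterministically at the join, mirroring how Perl's `($x, $y) = ($y, $x)` evaluates the entire right-hand side before assigning:

```python
import threading
import copy

def wc_fork_join(shared, tasks):
    """Run tasks concurrently, each against its own snapshot of `shared`.

    Each task sees only the pre-fork state (its private workspace copy)
    and returns a dict of writes; the writes are merged back in task
    order at the join, so the result is independent of thread scheduling.
    """
    results = [None] * len(tasks)

    def run(i, task):
        workspace = copy.deepcopy(shared)  # private snapshot at fork time
        results[i] = task(workspace)       # reads never see sibling writes

    threads = [threading.Thread(target=run, args=(i, t))
               for i, t in enumerate(tasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    for writes in results:                 # deterministic merge at the join
        shared.update(writes)
    return shared

# Analogous to Perl's ($x, $y) = ($y, $x): both tasks read pre-fork values.
state = {"x": 1, "y": 2}
wc_fork_join(state, [lambda w: {"x": w["y"]},
                     lambda w: {"y": w["x"]}])
# state is now {"x": 2, "y": 1} regardless of interleaving
```

Because every task reads from a frozen snapshot, the classic data race between a reader and a concurrent writer simply cannot arise in this model.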
Similar resources
NestStep: Nested Parallelism and Virtual Shared Memory for the BSP model
NestStep is a parallel programming language for the BSP (bulk-synchronous-parallel) model of parallel computation. Extending the classical BSP model, NestStep supports dynamically nested parallelism by nesting of supersteps and a hierarchical processor group concept. Furthermore, NestStep adds a virtual shared memory realization in software, where memory consistency is relaxed to superstep boun...
Leveraging MPI's One-Sided Communication Interface for Shared-Memory Programming
Hybrid parallel programming with MPI for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of utilizing two parallel programming systems in the same...
An Evaluation of Memory Consistency Models for Shared Memory Systems with ILP Processors (Rice University)
The memory consistency model of a shared memory multiprocessor determines the extent to which memory operations may be overlapped or reordered for better performance. Studies on previous-generation shared memory multiprocessors have shown that relaxed memory consistency models like release consistency (RC) can significantly outperform the conceptually simpler model of sequential consistency (SC)...
The effects of memory-access ordering on multiple-issue uniprocessor performance
We study the effect of memory access ordering policies on processor performance. Relaxed ordering policies increase available instruction-level parallelism, but such policies must be evaluated subject to their effect on memory consistency — since virtually all microprocessors are designed to be compatible with shared memory multiprocessor systems, even uniprocessor desktop computers are constra...
Implementing Distributed Shared Memory on Top of MPI: The DSMPI Library
Distributed shared memory has been recognized as an alternative programming model to exploit the parallelism in distributed memory systems, since it provides a higher level of abstraction than simple message passing. DSM combines the simple programming model of shared memory with the scalability of distributed memory machines. This paper presents DSMPI, a parallel library that runs atop MPI a...